
    Corporate governance and the weighting of performance measures in CEO compensation

    We empirically examine how corporate governance affects the structure of executive compensation contracts. In particular, we analyze the implicit weights of firm performance measures in explaining CEO compensation. We find that weaker corporate governance is associated with compensation contracts that put more weight on accounting-based measures of performance (i.e., return on assets) than on stock-based performance measures (i.e., market returns). This finding is consistent with CEOs in firms with weaker governance structures (where the CEO has more influence over the contracting process) choosing to weight more heavily those performance measures that they are better able to control. To further examine the implications of these results, we investigate the association between variation in compensation and governance and find that weaker governance is associated with lower variance in compensation. We also find that executive compensation contracts in firms with weaker governance rely more on cash compensation at the expense of stock-based compensation. Keywords: corporate governance; executive compensation; compensation contract design.
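    The "implicit weights" described above are typically recovered by regressing CEO compensation on the two performance measures; a minimal sketch with simulated data (the variable names, coefficients, and data-generating process are illustrative assumptions, not the paper's actual specification):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Simulated firm-year data (illustrative only).
    roa = rng.normal(0.05, 0.03, n)        # accounting performance (return on assets)
    stock_ret = rng.normal(0.08, 0.20, n)  # stock-based performance (market return)
    comp = 1.0 + 4.0 * roa + 0.5 * stock_ret + rng.normal(0, 0.1, n)  # compensation

    # OLS: the implicit weights are the coefficients on each performance measure.
    X = np.column_stack([np.ones(n), roa, stock_ret])
    beta, *_ = np.linalg.lstsq(X, comp, rcond=None)
    w_roa, w_ret = beta[1], beta[2]
    print(f"weight on ROA: {w_roa:.2f}, weight on stock return: {w_ret:.2f}")
    ```

    In the paper's setting, the interesting quantity is how the ratio of these two estimated weights varies with governance strength.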

    The Agriculture of Mexico After Ten Years of Nafta Implementation

    The inclusion of the agrarian sector in the North American Free Trade Agreement (NAFTA) has created controversy since the beginning of negotiations. Mexico’s official vision has been that free trade, as well as the agricultural reforms initiated in the country in the late eighties, would transform the sector and increase national income; NAFTA opponents, on the other hand, claim that the Agreement has resulted in food dependency, massive rural migration and aggravated poverty. This paper presents the main results of our econometric research on the actual outcomes of nearly ten years of NAFTA and around fifteen years of agrarian reforms, in terms of prices, trade and domestic agricultural production. Our findings suggest that the much-expected transformation of the Mexican agricultural sector has not occurred.

    Faster ASV decomposition for orthogonal polyhedra using the Extreme Vertices Model (EVM)

    The alternating sum of volumes (ASV) decomposition is a widely used technique for converting a B-Rep into a CSG model. The obtained CSG tree has convex primitives at its leaf nodes, while the contents of its internal nodes alternate between the set union and difference operators. This work first shows that the obtained CSG tree T can also be expressed as the regularized Exclusive-OR operation among all the convex primitives at the leaf nodes of T, regardless of the structure and internal nodes of T. This is an important result when EVM-represented orthogonal polyhedra are used, because in this model the Exclusive-OR operation runs much faster than the set union and difference operations. This work therefore applies the result to EVM-represented orthogonal polyhedra. It also presents experimental results that corroborate the theoretical results and includes some practical uses for the ASV decomposition of orthogonal polyhedra.
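    The XOR identity can be illustrated on voxelized orthogonal shapes, where regularized set operations reduce to boolean array operations. The voxel grid below is an illustrative stand-in for the paper's EVM representation, using nested boxes as the convex leaf primitives:

    ```python
    import numpy as np

    def box(grid_shape, lo, hi):
        """Boolean mask of an axis-aligned box [lo, hi) on a voxel grid."""
        m = np.zeros(grid_shape, dtype=bool)
        m[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
        return m

    shape = (16, 16, 16)
    A = box(shape, (0, 0, 0), (12, 12, 12))  # outermost convex primitive
    B = box(shape, (2, 2, 2), (10, 10, 10))  # nested inside A
    C = box(shape, (4, 4, 4), (8, 8, 8))     # nested inside B

    # ASV tree with alternating difference/union operators: (A - B) | C
    asv = (A & ~B) | C

    # Flat XOR of all leaf primitives, regardless of the tree's structure.
    xor = A ^ B ^ C

    print(np.array_equal(asv, xor))  # True: the two decompositions agree
    ```

    For nested primitives like these, each XOR toggles membership, which is exactly what the alternating union/difference tree computes.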

    Feature selection for microarray gene expression data using simulated annealing guided by the multivariate joint entropy

    In this work a new way to calculate the multivariate joint entropy is presented. This measure is the basis for a fast information-theoretic evaluation of gene relevance in a microarray gene expression data context. Its low complexity is based on the reuse of previous computations to calculate the current feature relevance. The mu-TAFS algorithm (named as such to differentiate it from previous TAFS algorithms) implements a simulated annealing technique specially designed for feature subset selection. The algorithm is applied to the maximization of gene subset relevance in several public-domain microarray data sets. The experimental results show notably high classification performance and small subsets formed by biologically meaningful genes.
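    Multivariate joint entropy over a set of discretized features is computed from the frequency of each joint value pattern; a minimal sketch of the basic measure (the paper's incremental computation-reuse scheme is not reproduced here):

    ```python
    import numpy as np
    from collections import Counter

    def joint_entropy(X):
        """Joint Shannon entropy (in bits) of the columns of X,
        where X is an (n_samples, n_features) array of discrete values."""
        counts = Counter(map(tuple, X))          # frequency of each joint pattern
        p = np.array(list(counts.values()), dtype=float) / len(X)
        return float(-np.sum(p * np.log2(p)))

    # Two independent fair binary features: joint entropy = 1 + 1 = 2 bits.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print(joint_entropy(X))  # 2.0
    ```

    In a feature-selection loop, this quantity is evaluated for many candidate gene subsets, which is why reusing previous computations matters for speed.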

    Convolutional Neural Networks Via Node-Varying Graph Filters

    Convolutional neural networks (CNNs) are being applied to an increasing number of problems and fields due to their superior performance in classification and regression tasks. Since two of the key operations that CNNs implement are convolution and pooling, these networks are implicitly designed to act on data described by regular structures such as images. Motivated by the recent interest in processing signals defined on irregular domains, we advocate a CNN architecture that operates on signals supported on graphs. The proposed design replaces the classical convolution not with a node-invariant graph filter (GF), which is the natural generalization of convolution to graph domains, but with a node-varying GF. This filter extracts different local features without increasing the output dimension of each layer and, as a result, bypasses the need for a pooling stage while involving only local operations. A second contribution is to replace the node-varying GF with a hybrid node-varying GF, a new type of GF introduced in this paper. While this alternative architecture can still be run locally without requiring a pooling stage, the number of trainable parameters is smaller and can be made independent of the data dimension. Tests are run on a synthetic source localization problem and on the 20NEWS dataset. Comment: Submitted to DSW 2018 (IEEE Data Science Workshop).
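    The difference between the two filter types can be sketched directly: a node-invariant GF applies one scalar tap per shift, while a node-varying GF gives each node its own tap per shift. A minimal numpy illustration (the graph, signal, and tap values are arbitrary assumptions, not the paper's experiments):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 6, 3                      # number of nodes, number of filter taps

    # Random symmetric adjacency matrix used as the graph shift operator S.
    A = rng.integers(0, 2, (N, N))
    S = ((A + A.T) > 0).astype(float)
    np.fill_diagonal(S, 0.0)

    x = rng.normal(size=N)           # graph signal, one value per node

    h = rng.normal(size=K)           # node-invariant taps: one scalar per shift k
    h_nv = rng.normal(size=(N, K))   # node-varying taps: one per node per shift k

    Skx = x.copy()                   # running S^k x
    y_inv = np.zeros(N)
    y_nv = np.zeros(N)
    for k in range(K):
        y_inv += h[k] * Skx          # node-invariant GF: sum_k h[k] S^k x
        y_nv += h_nv[:, k] * Skx     # node-varying GF: sum_k diag(h_nv[:,k]) S^k x
        Skx = S @ Skx                # next shift: a local neighbor exchange

    print(y_inv.shape, y_nv.shape)   # both stay (N,): no dimension growth
    ```

    Both filters are computed with the same K local shifts, which is why the node-varying design can extract node-specific features without a pooling stage or any growth in output dimension.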

    Efficient resources assignment schemes for clustered multithreaded processors

    New feature sizes provide a larger number of transistors per chip that architects could use to further exploit instruction level parallelism. However, these technologies also bring new challenges that complicate conventional monolithic processor designs. On the one hand, exploiting instruction level parallelism is leading to diminishing returns, so exploiting other sources of parallelism, such as thread level parallelism, is needed to keep raising performance with reasonable hardware complexity. On the other hand, clustered architectures have been widely studied as a way to reduce the inherent complexity of current monolithic processors. This paper studies the synergies and trade-offs between two concepts, clustering and simultaneous multithreading (SMT), in order to understand why conventional SMT resource assignment schemes are not as effective in clustered processors. These trade-offs are used to propose a novel resource assignment scheme that achieves an average speedup of 17.6% over ICOUNT while improving fairness by 24%.
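    The ICOUNT baseline mentioned above gives fetch priority, each cycle, to the thread with the fewest in-flight instructions in the front end and issue queues. A minimal sketch of that selection rule (a deliberately simplified model of the policy, not the paper's proposed scheme):

    ```python
    def icount_pick(inflight):
        """ICOUNT fetch policy: given a dict mapping thread id to its count of
        in-flight (fetched but not yet issued) instructions, pick the thread
        with the fewest, so threads that drain the pipeline fastest get
        fetch priority and slow threads cannot clog shared resources."""
        return min(inflight, key=inflight.get)

    # Thread 2 has the fewest in-flight instructions, so it fetches next.
    print(icount_pick({0: 14, 1: 9, 2: 3}))  # 2
    ```

    In a clustered SMT processor the shared resources are partitioned per cluster, which is one reason a single global instruction count is a weaker congestion signal there.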

    Informational structures and informational fields as a prototype for the description of postulates of the integrated information theory

    Informational Structures (IS) and Informational Fields (IF) have been recently introduced to deal with a continuous dynamical systems-based approach to Integrated Information Theory (IIT). IS and IF contain all the geometrical and topological constraints in the phase space. This allows one to characterize all the past and future dynamical scenarios for a system in any particular state. In this paper, we develop further steps in this direction, describing a proper continuous framework for an abstract formulation, which could serve as a prototype of the IIT postulates. Funding: National Science Center of Poland (UMO-2016/22/A/ST1/00077); Junta de Andalucía; Ministerio de Economía, Industria y Competitividad (MINECO), España.